Sparse Code Formation with Linear Inhibition

Author

  • Nam Do-Hoang Le
Abstract

Sparse code formation in the primary visual cortex (V1) has been an inspiration for many state-of-the-art visual recognition systems. To simulate this behavior, networks are trained under mathematical constraints of sparsity or selectivity. Meanwhile, another line of research emphasizes the role of lateral connections in sparse code formation. Lateral connections are synapses among neurons in the same layer, an essential part of biological neural networks. There are two types of lateral connections. Excitatory connections propagate firing signals across a neural layer and thus preserve the topographical order of neural stimuli. Inhibitory connections, on the other hand, decorrelate activations among neurons, which accounts for sparse code formation.
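The inhibitory mechanism the abstract describes can be made concrete with a small numerical sketch. The code below is an illustration of the general idea, not the paper's actual model: a layer receives feedforward drive W·x, and a fixed linear inhibition matrix V recurrently suppresses co-active neurons until only a few remain strongly active. All names and parameter values (`W`, `V`, `n_iters`, the scaling of `V`) are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; these are assumptions, not values from the paper.
n_inputs, n_neurons = 64, 128
W = rng.normal(size=(n_neurons, n_inputs)) / np.sqrt(n_inputs)  # feedforward weights

# Non-negative inhibitory lateral weights, no self-inhibition,
# scaled so that the settling iteration below is a contraction.
V = np.abs(rng.normal(size=(n_neurons, n_neurons)))
np.fill_diagonal(V, 0.0)
V *= 0.9 / np.linalg.norm(V, 2)

def respond(x, n_iters=50):
    """Settle the layer: feedforward drive minus linear lateral inhibition."""
    drive = W @ x
    a = np.maximum(drive, 0.0)
    for _ in range(n_iters):
        a = np.maximum(drive - V @ a, 0.0)  # inhibition pushes weak units to zero
    return a

x = rng.normal(size=n_inputs)
a = respond(x)
print(f"active neurons: {np.mean(a > 1e-6):.0%}")  # fewer than with ReLU alone
```

Because each neuron is penalized in proportion to the activity of its neighbours, correlated units compete during settling, and the resulting code is sparser than the purely feedforward response; this is the decorrelating effect the abstract attributes to inhibitory connections.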

Similar articles

Speech enhancement based on hidden Markov model using sparse code shrinkage

This paper presents a new hidden Markov model-based (HMM-based) speech enhancement framework based on independent component analysis (ICA). We propose analytical procedures for training clean-speech and noise models by the Baum re-estimation algorithm and present a maximum a posteriori (MAP) estimator based on a Laplace-Gaussian combination (for clean speech and noise, respectively) in the HMM ...

Full text
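The core of such a MAP estimator can be written down directly. Under the standard Laplace-Gaussian assumption (Laplacian prior on the clean coefficient, Gaussian noise), the MAP estimate reduces to soft thresholding, which is the shrinkage nonlinearity behind sparse code shrinkage. The sketch below is a generic illustration of that rule, not the paper's full HMM-based estimator; all parameter values are chosen for the example.

```python
import numpy as np

def map_shrink(y, noise_var, laplace_scale):
    """MAP estimate of a Laplacian signal observed in Gaussian noise.

    Minimising (y - s)^2 / (2 * noise_var) + |s| / laplace_scale over s
    gives soft thresholding with threshold noise_var / laplace_scale.
    """
    threshold = noise_var / laplace_scale
    return np.sign(y) * np.maximum(np.abs(y) - threshold, 0.0)

# Toy usage: small coefficients are shrunk to zero, large ones survive.
y = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
print(map_shrink(y, noise_var=0.25, laplace_scale=0.5))  # threshold = 0.5
```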

Robust Estimation in Linear Regression with Multicollinearity and Sparse Models

One of the factors affecting the statistical analysis of data is the presence of outliers. Methods that are not affected by outliers are called robust methods. Robust regression methods are robust estimation methods for regression model parameters in the presence of outliers. Besides outliers, the linear dependency of regressor variables, which is called multicollinearity...

Full text

Low Rate Is Insufficient for Local Testability

Locally testable codes are error-correcting codes for which membership of a given word in the code can be tested probabilistically by examining it in very few locations. A linear code C ⊆ F₂ⁿ is called sparse if dim(C) = O(log n). We say that a code C ⊆ F₂ⁿ is ε-biased if all nonzero codewords of C have relative weight in the range (1/2 − ε, 1/2 + ε), where ε may be a function of n. Kaufman and Suda...

Full text
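The ε-bias definition is easy to check on a toy example. The sketch below is my own illustration, not from the paper: it enumerates the nonzero codewords of a small binary linear code and measures how far their relative weights stray from 1/2.

```python
import itertools
import numpy as np

def relative_weights(generator_rows):
    """Relative weight of every nonzero codeword of the binary linear
    code spanned (mod 2) by `generator_rows`."""
    G = np.array(generator_rows) % 2
    k, n = G.shape
    weights = []
    for coeffs in itertools.product([0, 1], repeat=k):
        if any(coeffs):
            word = np.mod(np.array(coeffs) @ G, 2)
            weights.append(word.sum() / n)
    return weights

# Toy 2-dimensional code of length 8 (a hypothetical example).
ws = relative_weights([[1, 0, 1, 0, 1, 0, 1, 0],
                       [0, 1, 1, 0, 0, 1, 1, 0]])
eps = max(abs(w - 0.5) for w in ws)
print(f"code is {eps}-biased: weights {ws}")
```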

Advanced Algorithms, December 23, 2004. Lecture 20: LT Codes. Lecturer: Amin

Random linear fountain codes were introduced in the last lecture as sparse-graph codes for erasure channels. It turned out that their encoding and decoding costs were quadratic and cubic, respectively, in the number of packets encoded. In this lecture we study Luby Transform (LT) codes, pioneered by Michael Luby, which retain the good performance of random linear fountain codes while drastically reduci...

Full text
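To make the encoding side of that description tangible, here is a minimal sketch of producing one LT-code output symbol: draw a degree d from a degree distribution, pick d input packets uniformly at random, and XOR them together. The toy degree distribution below is an assumption made for illustration; a real LT code would use Luby's robust soliton distribution.

```python
import random

def lt_encode_symbol(packets, degree_dist):
    """Produce one LT output symbol as the XOR of d randomly chosen packets.

    `degree_dist` is a list of (degree, probability) pairs; the toy
    distribution used below stands in for the robust soliton distribution.
    """
    degrees, weights = zip(*degree_dist)
    d = random.choices(degrees, weights=weights, k=1)[0]
    neighbours = random.sample(range(len(packets)), k=d)
    symbol = 0
    for i in neighbours:
        symbol ^= packets[i]  # XOR accumulates the chosen packets
    return neighbours, symbol

packets = [0x12, 0x34, 0x56, 0x78]        # toy input packets (as integers)
dist = [(1, 0.2), (2, 0.5), (3, 0.3)]     # hypothetical degree distribution
print(lt_encode_symbol(packets, dist))
```

Decoding repeatedly resolves degree-1 symbols and XORs their values out of neighbouring symbols, which is what gives LT codes their low decoding cost compared with the cubic cost of random linear fountain codes.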

Learning Data Representations with Sparse Coding Neural Gas

We consider the problem of learning an unknown (overcomplete) basis from unknown sparse linear combinations. Introducing the “sparse coding neural gas” algorithm, we show how to employ a combination of the original neural gas algorithm and Oja’s rule in order to learn a simple sparse code that represents each training sample by a multiple of one basis vector. We generalise this algorithm usin...

Full text
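The combination mentioned in that abstract can be sketched in a few lines. The following is my reading of the description, not the authors' implementation: all basis vectors are ranked by response magnitude (the neural gas step) and then updated with Oja's rule using a rank-decayed learning rate. The hyperparameters `lr` and `decay` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_coding_neural_gas_step(W, x, lr=0.05, decay=2.0):
    """One update: rank basis vectors by response magnitude (neural gas),
    then apply Oja's rule with a rank-decayed learning rate."""
    y = W @ x                                   # responses of all basis vectors
    ranks = np.argsort(np.argsort(-np.abs(y)))  # rank 0 = best matching unit
    step = lr * np.exp(-ranks / decay)          # neural-gas neighbourhood decay
    # Oja's rule keeps each basis vector approximately unit norm.
    W += (step * y)[:, None] * (x[None, :] - y[:, None] * W)
    return W

# Toy usage: learn 8 basis vectors from random 16-dimensional samples.
W = rng.normal(size=(8, 16))
W /= np.linalg.norm(W, axis=1, keepdims=True)
for _ in range(500):
    W = sparse_coding_neural_gas_step(W, rng.normal(size=16))
print(np.linalg.norm(W, axis=1))  # norms stay near 1 under Oja's rule
```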


Journal:
  • CoRR

Volume: abs/1503.04115

Published: 2015